From the textbook Business Dynamics: Systems Thinking and Modeling for a Complex World by John D. Sterman, Section 16.5 ("Implications for Forecast Consumers"):
The results suggest important lessons for forecasters and especially for managers and decision makers who must choose which forecasts and forecasting methods to buy.
First, most forecasts are not very good. Forecasts are most accurate when the underlying dynamics are stable, as when predicting the influence of regular phenomena such as seasonal variations. But forecasting methods are particularly poor when there are changes in trends, noise, and other sources of turbulence. These are precisely the times when people are most interested in forecasts.
Second, most forecasting methods frequently miss changes in trends and turning points in cycles, lagging behind rather than anticipating them. The systematic errors in forecasts of inflation, commodity prices, energy use, and other variables strongly suggest adaptive expectations and simple trend extrapolation often dominate professional forecasts. These methods do correct errors over time, but because they involve smoothing past data, they inevitably introduce delays that cause the forecasts to miss key turning points and shifts in growth rates.
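(Aside, not part of Sterman's text: "adaptive expectations" amount to exponentially smoothing past observations, correcting the forecast by only a fraction of each error. A minimal Python sketch, using an arbitrary smoothing fraction and made-up data, shows why such a forecast necessarily trails a turning point:)

```python
# Illustration only: adaptive expectations as exponential smoothing.
# Each period the forecast is nudged toward the data by a fraction alpha
# of the latest error, so it always lags behind changes in the data.

def adaptive_forecast(series, alpha=0.3):
    """Return one-step-ahead forecasts for each period of `series`."""
    forecast = series[0]              # initialize at the first observation
    forecasts = []
    for actual in series:
        forecasts.append(forecast)
        forecast += alpha * (actual - forecast)   # correct a fraction of the error
    return forecasts

# A made-up series that rises steadily, then turns down at period 10.
data = [10 + t for t in range(10)] + [19 - t for t in range(10)]
for actual, predicted in zip(data, adaptive_forecast(data)):
    print(actual, round(predicted, 1))
```

With these toy numbers the forecast keeps rising for several periods after the data have peaked and turns down only later, exactly the kind of delayed response to turning points the passage describes.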
Third, smoothing and extrapolation of the past trend in the variable itself seem to dominate other considerations in forecasting. Though forecasters often claim to (and indeed may) examine a wide range of variables in making their forecasts, past values and past trends strongly anchor their forecasts. The influence of other variables is weak because their connections to the target variable are poorly understood, unstable, noisy, and ambiguous. Forecasters often behave as if they were using simple smoothing and naive extrapolation even when they are using complicated formal models. They adjust the parameters and values of exogenous inputs until the output of the model is "reasonable," that is, until it matches their intuition. Intuition, however, is biased by a variety of judgmental heuristics and tends to be strongly anchored to recent trends.
Fourth, forecasters tend to underestimate uncertainty in their forecasts, often failing to provide a range, alternative scenarios, or a list of factors to which their forecasts are sensitive (see the overconfidence bias, section 8.2.5).
How then can managers improve the value they get from forecasts? Fight against the overconfidence bias by explicitly challenging assumptions and asking how your expectations might be wrong (for practical examples, see Russo and Schoemaker 1989). Require forecasters to document their assumptions, make their data sources explicit, and specify the methods they are using. Don't allow forecasters to use add factoring (chapter 21 discusses standards for replicability and rigor in modeling).
Even so, improving forecast accuracy is difficult. The best way to improve the benefit/cost ratio of forecasting is to reduce the cost. The projections of expensive forecasting services and models tend to be dominated by smoothing and trend extrapolation. Managers can save a great deal of money by smoothing and extrapolating the data themselves. Forecast accuracy may not improve, but the cost of acquiring the forecasts will fall.
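(Another aside, not part of the book: the do-it-yourself forecast the passage alludes to can be as little as a few lines of Python that smooth the level and trend of recent data and project the trend forward. The smoothing parameters and the sales figures below are arbitrary assumptions for illustration:)

```python
# Illustration only (not Sterman's method): smooth the level and the trend of
# the history, then extrapolate the smoothed trend forward (Holt-style double
# exponential smoothing with arbitrary parameter values).

def smooth_and_extrapolate(series, horizon=4, alpha=0.3, beta=0.1):
    level = series[0]
    trend = series[1] - series[0]            # crude initial trend estimate
    for actual in series[2:]:
        prev_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # project the smoothed trend `horizon` periods beyond the data
    return [level + (h + 1) * trend for h in range(horizon)]

sales = [112, 118, 121, 127, 133, 136, 142, 149]   # made-up history
print([round(x, 1) for x in smooth_and_extrapolate(sales)])
```

The result is simply the recent trend projected forward; its accuracy is no better than the commercial forecasts it mimics, but the cost of producing it is close to zero.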
Finally, focus on the development of decision rules and strategies that are robust to the inevitable forecast errors. The real value of modeling is not to anticipate and react to problems in the environment but to eliminate the problems by changing the underlying structure of the system. Modelers and their clients should be designers, not diviners. In the words of Antoine de Saint-Exupéry, "As for the future, your task is not to foresee, but to enable it."
(cf. Transient Behavior (1999-05-11), Fifth Disciplinarians (2000-09-10), ...) - ^z - 2017-07-05